In this paper, we aim to design an efficient real-time object detector that exceeds the YOLO series and is easily extensible for many object recognition tasks such as instance segmentation and rotated object detection. To obtain a more efficient model architecture, we explore an architecture that has compatible capacities in the backbone and neck, constructed by a basic building block that consists of large-kernel depth-wise convolutions. We further introduce soft labels when calculating matching costs in the dynamic label assignment to improve accuracy. Together with better training techniques, the resulting object detector, named RTMDet, achieves 52.8% AP on COCO with 300+ FPS on an NVIDIA 3090 GPU, outperforming the current mainstream industrial detectors. RTMDet achieves the best parameter-accuracy trade-off with tiny/small/medium/large/extra-large model sizes for various application scenarios, and obtains new state-of-the-art performance on real-time instance segmentation and rotated object detection. We hope the experimental results can provide new insights into designing versatile real-time object detectors for many object recognition tasks. Code and models are released at https://github.com/open-mmlab/mmdetection/tree/3.x/configs/rtmdet.
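A minimal sketch of the kind of building block the abstract describes, a large-kernel depth-wise convolution paired with a point-wise convolution and a residual connection, assuming PyTorch; the layer arrangement and kernel size are illustrative, not the exact RTMDet block.

```python
import torch
import torch.nn as nn

class LargeKernelDWBlock(nn.Module):
    """Illustrative block: 5x5 depth-wise conv + point-wise conv (not the exact RTMDet design)."""
    def __init__(self, channels: int, kernel_size: int = 5):
        super().__init__()
        self.dw = nn.Conv2d(channels, channels, kernel_size,
                            padding=kernel_size // 2, groups=channels, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.pw = nn.Conv2d(channels, channels, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.act(self.bn1(self.dw(x)))   # spatial mixing with a large depth-wise kernel
        out = self.act(self.bn2(self.pw(out))) # channel mixing with a 1x1 conv
        return x + out                         # residual connection

x = torch.randn(1, 64, 80, 80)
print(LargeKernelDWBlock(64)(x).shape)  # torch.Size([1, 64, 80, 80])
```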
The security of artificial intelligence (AI) is an important research area towards safe, reliable, and trustworthy AI systems. To accelerate the research on AI security, the Artificial Intelligence Security Competition (AISC) was organized by the Zhongguancun Laboratory, China Industrial Control Systems Cyber Emergency Response Team, Institute for Artificial Intelligence, Tsinghua University, and RealAI as part of the Zhongguancun International Frontier Technology Innovation Competition (https://www.zgc-aisc.com/en). The competition consists of three tracks, including Deepfake Security Competition, Autonomous Driving Security Competition, and Face Recognition Security Competition. This report will introduce the competition rules of these three tracks and the solutions of top-ranking teams in each track.
In this study, we dive into the unique challenges faced by semi-supervised object detection (SSOD). We observe that current detectors generally suffer from three inconsistency problems: 1) assignment inconsistency, where the traditional static assignment strategy is sensitive to labeling noise; 2) subtask inconsistency, where classification and regression predictions are misaligned at the same feature point; and 3) temporal inconsistency, where pseudo bounding boxes vary dramatically across different training steps. These issues give the student network inconsistent optimization objectives, which degrades performance and slows model convergence. We therefore propose a systematic solution, termed Consistent Teacher, to remedy the above challenges. First, adaptive anchor assignment replaces the static assignment strategy, which makes the student network robust to noisy pseudo bounding boxes. Then, we calibrate the subtask predictions by designing a feature alignment module. Finally, we adopt a Gaussian Mixture Model (GMM) to dynamically adjust the pseudo-box threshold. Consistent Teacher provides a new strong baseline on a range of SSOD evaluations. With only 10% of the annotated MS-COCO data, it achieves 40.0 mAP with a ResNet-50 backbone, surpassing the previous state of the art that uses only pseudo labels by 4 mAP. When trained on fully annotated MS-COCO with additional unlabeled data, the performance further increases to 49.1 mAP. Our code will be open-sourced soon.
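A rough sketch of one way to realize the dynamic pseudo-box threshold described above: fit a two-component GMM to the predicted confidence scores and take a cutoff from the higher-scoring mode. The exact statistic and per-class handling in Consistent Teacher may differ.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def dynamic_threshold(scores: np.ndarray) -> float:
    """Fit a 2-component GMM to confidence scores and return a cutoff that
    separates the noisy and reliable pseudo-box modes (illustrative)."""
    gmm = GaussianMixture(n_components=2, random_state=0).fit(scores.reshape(-1, 1))
    # Use the mean of the higher-scoring component as the acceptance threshold.
    return float(gmm.means_.max())

# Synthetic scores: many low-confidence detections plus a reliable cluster.
scores = np.concatenate([np.random.beta(2, 8, 500), np.random.beta(8, 2, 100)])
print(round(dynamic_threshold(scores), 3))
```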
We present an open-source toolbox, named MMRotate, which provides a coherent algorithmic framework for the training, inference, and evaluation of popular deep-learning-based rotated object detection algorithms. MMRotate implements 18 state-of-the-art algorithms and supports the three most frequently used angle definition methods. To facilitate future research and industrial applications of rotated-object-detection-related problems, we also provide a large number of trained models and detailed benchmarks to give insights into the performance of rotated object detection. MMRotate will be publicly released at https://github.com/open-mmlab/mmrotate.
Knowledge sharing and model personalization are two key components for addressing the non-IID challenge in federated learning (FL). Most existing FL methods focus on two extremes: 1) learning a single shared model to serve all clients with non-IID data, and 2) learning a personalized model for each client (i.e., personalized FL). There is a trade-off solution, namely clustered FL or cluster-wise personalized FL, which aims to group similar clients into one cluster and then learn a shared model for all clients within that cluster. This paper revisits clustered FL by formulating it as a bi-level optimization framework that can unify existing methods. We propose a new theoretical analysis framework that proves convergence by considering the cohesion among clients. Moreover, we instantiate this framework in an algorithm called Weighted Clustered Federated Learning (WeCFL). Empirical analysis verifies the theoretical results and demonstrates the effectiveness of the proposed WeCFL under the proposed clustered non-IID settings.
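An illustrative sketch of the cluster-then-aggregate idea: group clients by the similarity of their model updates, then compute one weighted average per cluster. The clustering criterion and weighting in WeCFL itself are more elaborate; all names below are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_and_aggregate(client_updates: np.ndarray, num_clusters: int, sizes: np.ndarray):
    """Group clients by update similarity, then compute one weighted-average
    model per cluster (illustrative; not the exact WeCFL objective)."""
    labels = KMeans(n_clusters=num_clusters, n_init=10, random_state=0).fit_predict(client_updates)
    cluster_models = []
    for k in range(num_clusters):
        idx = np.where(labels == k)[0]
        w = sizes[idx] / sizes[idx].sum()          # weight clients by local data size
        cluster_models.append((w[:, None] * client_updates[idx]).sum(axis=0))
    return labels, cluster_models

updates = np.random.randn(8, 16)                   # 8 clients, 16-dim flattened updates
labels, models = cluster_and_aggregate(updates, num_clusters=2,
                                        sizes=np.random.randint(50, 200, 8))
print(labels, models[0].shape)
```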
Graph convolutional networks have become indispensable for deep learning on graph-structured data. Most existing graph convolutional networks share two major shortcomings. First, they are essentially low-pass filters and therefore ignore the potentially useful middle and high frequency bands of graph signals. Second, the bandwidth of existing graph convolutional filters is fixed: the parameters of a graph convolutional filter only transform the graph input without changing the curvature of the filter function. In practice, unless we have expert domain knowledge, we are uncertain whether the frequencies at a certain point should be retained or cut off. In this paper, we propose Automatic Graph Convolutional Networks (AutoGCN) to capture the full spectrum of graph signals and automatically update the bandwidth of the graph convolutional filters. Although it is based on graph spectral theory, AutoGCN also operates in the spatial domain and has a spatial form. Experimental results show that AutoGCN achieves significant improvements over baseline methods that act only as low-pass filters.
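A toy sketch of the core idea, a spectral filter whose centre and bandwidth are learnable rather than fixed low-pass. The Gaussian filter form, parameter names, and dense eigendecomposition below are assumptions for illustration, not AutoGCN's actual construction.

```python
import torch
import torch.nn as nn

class LearnableBandFilter(nn.Module):
    """Spectral filter g(lam) = exp(-gamma * (lam - mu)^2) with learnable centre mu
    and bandwidth gamma (toy illustration of an adaptive-band graph filter)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.mu = nn.Parameter(torch.tensor(0.0))          # filter centre on the spectrum
        self.log_gamma = nn.Parameter(torch.tensor(0.0))   # log bandwidth, kept positive via exp
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, laplacian: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        lam, U = torch.linalg.eigh(laplacian)               # eigendecomposition (small graphs only)
        g = torch.exp(-self.log_gamma.exp() * (lam - self.mu) ** 2)
        x_filtered = U @ torch.diag(g) @ U.T @ x            # filter in the spectral domain
        return self.lin(x_filtered)

A = torch.rand(10, 10); A = ((A + A.T) > 1).float()        # toy symmetric adjacency
L = torch.diag(A.sum(1)) - A                                # unnormalised Laplacian
print(LearnableBandFilter(4, 8)(L, torch.randn(10, 4)).shape)
```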
Heterogeneity across clients in federated learning (FL) usually hinders optimization convergence and generalization performance when clients' knowledge is aggregated in the gradient space. For example, clients may differ in data distribution, network latency, input/output space, and/or model architecture, which can easily lead to misalignment of their local gradients. To improve tolerance to heterogeneity, we propose a novel federated prototype learning (FedProto) framework in which clients and the server communicate abstract class prototypes instead of gradients. FedProto aggregates the local prototypes collected from different clients and then sends the global prototypes back to all clients to regularize the training of their local models. Training on each client aims to minimize the classification error on local data while keeping the resulting local prototypes close to the corresponding global ones. Moreover, we provide a theoretical analysis of the convergence rate of FedProto under non-convex objectives. In experiments, we propose a benchmark setting tailored for heterogeneous FL, under which FedProto outperforms several competing approaches on multiple datasets.
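A compact sketch of the prototype exchange described above: each client averages the embeddings of its samples per class, and the server averages the local prototypes into global ones; variable names are illustrative. In local training, the abstract's regularization can then be a penalty (e.g., an L2 distance) between each local prototype and its global counterpart, added to the classification loss.

```python
import numpy as np

def local_prototypes(features: np.ndarray, labels: np.ndarray) -> dict:
    """Client side: per-class mean embedding computed on local data."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def aggregate_prototypes(client_protos: list) -> dict:
    """Server side: average the local prototypes of each class across clients."""
    global_protos = {}
    for protos in client_protos:
        for c, p in protos.items():
            global_protos.setdefault(c, []).append(p)
    return {c: np.mean(ps, axis=0) for c, ps in global_protos.items()}

# Two toy clients with 16-dim embeddings over classes {0, 1}
clients = [local_prototypes(np.random.randn(20, 16), np.random.randint(0, 2, 20))
           for _ in range(2)]
print(sorted(aggregate_prototypes(clients).keys()))
```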
Modeling multivariate time series has long been a subject that has attracted researchers from a diverse range of fields including economics, finance, and traffic. A basic assumption behind multivariate time series forecasting is that its variables depend on one another, yet existing methods fail to fully exploit the latent spatial dependencies between pairs of variables. In recent years, meanwhile, graph neural networks (GNNs) have shown high capability in handling relational dependencies. GNNs require well-defined graph structures for information propagation, which means they cannot be applied directly to multivariate time series where the dependencies are not known in advance. In this paper, we propose a general graph neural network framework designed specifically for multivariate time series data. Our approach automatically extracts the uni-directed relations among variables through a graph learning module, into which external knowledge like variable attributes can be easily integrated. A novel mix-hop propagation layer and a dilated inception layer are further proposed to capture the spatial and temporal dependencies within the time series. The graph learning, graph convolution, and temporal convolution modules are jointly learned in an end-to-end framework. Experimental results show that our proposed model outperforms the state-of-the-art baseline methods on 3 of 4 benchmark datasets and achieves on-par performance with other approaches on two traffic datasets that provide extra structural information.
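A small sketch of a graph learning module of the kind described above: learn two sets of node embeddings and derive a uni-directed adjacency from their asymmetric similarity. The exact formulation in the paper may differ in details such as top-k sparsification; names below are assumptions.

```python
import torch
import torch.nn as nn

class GraphLearner(nn.Module):
    """Learn a uni-directed adjacency matrix from two sets of node embeddings
    (illustrative sketch of an automatic graph learning module)."""
    def __init__(self, num_nodes: int, dim: int, alpha: float = 3.0):
        super().__init__()
        self.e1 = nn.Embedding(num_nodes, dim)
        self.e2 = nn.Embedding(num_nodes, dim)
        self.alpha = alpha

    def forward(self) -> torch.Tensor:
        m1 = torch.tanh(self.alpha * self.e1.weight)
        m2 = torch.tanh(self.alpha * self.e2.weight)
        # Antisymmetric score keeps at most one direction per node pair.
        a = m1 @ m2.T - m2 @ m1.T
        return torch.relu(torch.tanh(self.alpha * a))

adj = GraphLearner(num_nodes=7, dim=16)()
print(adj.shape)  # torch.Size([7, 7])
```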
GPT-2 and BERT have demonstrated the effectiveness of using pre-trained language models (LMs) on various natural language processing tasks. However, LM fine-tuning often suffers from catastrophic forgetting when applied to resource-rich tasks. In this work, we introduce a concerted training framework (CTNMT) that is key to integrating pre-trained LMs into neural machine translation (NMT). Our proposed CTNMT consists of three techniques: a) asymptotic distillation, to ensure that the NMT model retains the previously pre-trained knowledge; b) a dynamic switching gate, to avoid catastrophic forgetting of the pre-trained knowledge; and c) a strategy that adjusts the learning pace according to a scheduled policy. Our experiments on machine translation show that CTNMT gains up to 3 BLEU score on the WMT14 English-German pair, even surpassing the previous state-of-the-art pre-training-aided NMT. On the large WMT14 English-French task with 40 million sentence pairs, our base model still significantly improves upon the state-of-the-art Transformer big model by more than 1 BLEU score. The code and models can be downloaded from https://github.com/bytedance/neurst/tree/master/examples/ctnmt.
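A rough sketch of what a dynamic switching gate can look like: a learned sigmoid gate that interpolates, at every position, between the pre-trained LM representation and the NMT encoder state. Parameter names and the exact gating form are assumptions rather than CTNMT's precise design.

```python
import torch
import torch.nn as nn

class SwitchingGate(nn.Module):
    """Gated fusion of a pre-trained LM state and an NMT encoder state (sketch)."""
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, h_lm: torch.Tensor, h_nmt: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate(torch.cat([h_lm, h_nmt], dim=-1)))
        return g * h_lm + (1 - g) * h_nmt   # element-wise interpolation per position

h_lm, h_nmt = torch.randn(2, 10, 512), torch.randn(2, 10, 512)
print(SwitchingGate(512)(h_lm, h_nmt).shape)  # torch.Size([2, 10, 512])
```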
Spatial-temporal graph modeling is an important task for analyzing the spatial relations and temporal trends of components in a system. Existing approaches mostly capture the spatial dependency on a fixed graph structure, assuming that the underlying relation between entities is pre-determined. However, the explicit graph structure (relation) does not necessarily reflect the true dependency, and genuine relations may be missing due to incomplete connections in the data. Furthermore, existing methods are ineffective at capturing temporal trends, as the RNNs or CNNs they employ cannot capture long-range temporal sequences. To overcome these limitations, we propose in this paper a novel graph neural network architecture, Graph WaveNet, for spatial-temporal graph modeling. By developing a novel adaptive dependency matrix and learning it through node embedding, our model can precisely capture the hidden spatial dependency in the data. With a stacked dilated 1D convolution component whose receptive field grows exponentially as the number of layers increases, Graph WaveNet is able to handle very long sequences. These two components are integrated seamlessly in a unified framework, and the whole framework is learned in an end-to-end manner. Experimental results on two public traffic network datasets, METR-LA and PEMS-BAY, demonstrate the superior performance of our algorithm.
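A short sketch of the two ingredients the abstract highlights: a self-adaptive adjacency matrix built from learned node embeddings, and a stack of dilated 1D convolutions whose receptive field doubles with each layer. This follows the abstract's description rather than the released implementation; dimensions and names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveAdjacency(nn.Module):
    """Self-adaptive dependency matrix from learned source/target node embeddings."""
    def __init__(self, num_nodes: int, dim: int = 10):
        super().__init__()
        self.src = nn.Parameter(torch.randn(num_nodes, dim))
        self.dst = nn.Parameter(torch.randn(num_nodes, dim))

    def forward(self) -> torch.Tensor:
        # Row-normalized, non-negative dependency weights between all node pairs.
        return F.softmax(F.relu(self.src @ self.dst.T), dim=1)

# Dilated 1D convolutions: receptive field grows exponentially with depth.
tcn = nn.Sequential(*[nn.Conv1d(32, 32, kernel_size=2, dilation=2 ** i) for i in range(4)])

adj = AdaptiveAdjacency(num_nodes=207)()   # e.g. 207 sensors, as in METR-LA
x = torch.randn(8, 32, 64)                 # (batch, channels, time)
print(adj.shape, tcn(x).shape)
```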